59 research outputs found

    Scoring Systems: Levels of Abstraction


    Consent to Targeted Advertising

    Targeted advertising in digital markets involves multiple actors collecting, exchanging, and processing personal data for the purpose of capturing users’ attention in online environments. This ecosystem has given rise to considerable adverse effects on individuals and society, resulting from mass surveillance, the manipulation of choices and opinions, and the spread of addictive or fake messages. Against this background, this article critically discusses the regulation of consent in online targeted advertising. To this end, we review EU laws and proposals and consider the extent to which a requirement of informed consent may provide effective consumer protection. On the basis of such an analysis, we make suggestions for possible avenues that may be pursued.

    Algorithmic fairness through group parities? The case of COMPAS-SAPMOC

    Machine learning classifiers are increasingly used to inform, or even make, decisions significantly affecting human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and addressing unfairness in algorithmic decision-making. This paper critically discusses the adoption of group-parity criteria (e.g., demographic parity, equality of opportunity, treatment equality) as fairness standards. To this end, we evaluate the use of machine learning methods relative to different steps of the decision-making process: assigning a predictive score, linking a classification to the score, and adopting decisions based on the classification. Throughout our inquiry we use the COMPAS system, complemented by a radical simplification of it (our SAPMOC I and SAPMOC II models), as our running examples. Through these examples, we show how a system that is equally accurate for different groups may fail to comply with group-parity standards, owing to different base rates in the population. We discuss the general properties of the statistics determining the satisfaction of group-parity criteria and levels of accuracy. Using the distinction between scoring, classifying, and deciding, we argue that equalisation of classifications/decisions between groups can be achieved through group-dependent thresholding. We discuss contexts in which this approach may be meaningful and useful in pursuing policy objectives. We claim that the implementation of group-parity standards should be left to competent human decision-makers, under appropriate scrutiny, since it involves discretionary value-based political choices. Accordingly, predictive systems should be designed in such a way that relevant policy goals can be transparently implemented. Our paper presents three main contributions: (1) it addresses a complex predictive system through the lens of simplified toy models; (2) it argues for selective policy interventions on the different steps of automated decision-making; (3) it points to the limited significance of statistical notions of fairness to achieve social goals.
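
    To make the base-rate effect concrete, here is a minimal, hypothetical Python sketch (invented for this summary, not the paper's SAPMOC models): two groups are scored by the same procedure, yet a single shared threshold yields different positive-classification rates, while group-dependent thresholds chosen by quantile equalise them.

    # Minimal sketch: equal scoring procedure, different base rates ->
    # demographic parity fails under a shared threshold; group-dependent
    # thresholds equalise positive classification rates.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(base_rate, n=10_000):
        y = rng.random(n) < base_rate                         # true outcome
        score = np.clip(0.2 * y + 0.8 * rng.random(n), 0, 1)  # noisy predictive score
        return y, score

    y_a, s_a = simulate(base_rate=0.5)  # group A
    y_b, s_b = simulate(base_rate=0.2)  # group B, lower base rate

    def positive_rate(score, thr):
        return (score >= thr).mean()

    # One shared threshold: unequal positive rates (no demographic parity)
    print(positive_rate(s_a, 0.6), positive_rate(s_b, 0.6))

    # Group-dependent thresholds targeting the same positive rate for both groups
    target = 0.3
    thr_a = np.quantile(s_a, 1 - target)
    thr_b = np.quantile(s_b, 1 - target)
    print(positive_rate(s_a, thr_a), positive_rate(s_b, thr_b))  # both ~= 0.3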

    E-Health: Criminal Liability and Automation

    This research thesis investigates the issues of criminal liability that arise when highly automated and/or artificial intelligence systems are used in e-Health. The investigation frames the healthcare system from a socio-technical perspective, paying specific attention to human-machine interaction, the level of automation involved, and the concepts of error and risk management. Several areas are examined in depth: product liability for defective medical devices; medical liability connected with the use of highly automated systems and with system defects; and, in particular, criminal liability arising from the use of artificial intelligence systems, together with the doctrinal models developed to regulate this phenomenon. The following models are analysed: the zoological model, the perpetration-through-another model, the natural-and-probable-consequences model, and the direct liability model. The thesis also examines whether an intelligent autonomous agent can satisfy the requirements of actus reus and mens rea, the necessary conditions for attributing criminal liability, when an AI engages in conduct that abstractly matches a criminal offence. Liability profiles are analysed on the basis of real-world cases and scenarios, and possible solutions and remedies are finally proposed, also in light of the theory of normative agents.

    Defeasible Systems in Legal Reasoning: A Comparative Assessment

    Different formalisms for defeasible reasoning have been used to represent legal knowledge and to reason with it. In this work, we provide an overview of the following logic-based approaches to defeasible reasoning: Defeasible Logic, Answer Set Programming, ABA+, ASPIC+, and DeLP. We compare features of these approaches from three perspectives: the logical model (knowledge representation), the method (computational mechanisms), and the technology (available software). On this basis, we identify and apply criteria for assessing their suitability for legal applications. We discuss the different approaches through a legal running example.
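
    For flavour, the following toy Python sketch (invented for this overview; it implements none of the surveyed formalisms exactly) illustrates the core pattern they share: a defeasible rule supports its conclusion unless a superior conflicting rule also fires.

    # Toy defeasible inference (illustrative only, not Defeasible Logic,
    # ASP, ABA+, ASPIC+, or DeLP): a rule's conclusion is accepted unless a
    # superior rule with the opposite conclusion also fires.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        name: str
        premises: frozenset
        conclusion: str
        superior_to: frozenset = frozenset()  # names of rules this one defeats

    def conclusions(facts, rules):
        fired = [r for r in rules if r.premises <= facts]
        accepted = set()
        for r in fired:
            defeated = any(
                q.conclusion == "not_" + r.conclusion and r.name in q.superior_to
                for q in fired
            )
            if not defeated:
                accepted.add(r.conclusion)
        return accepted

    # Legal running example: contracts are normally valid, but a contract
    # concluded by a minor is not; the more specific rule prevails.
    rules = [
        Rule("r1", frozenset({"contract"}), "valid"),
        Rule("r2", frozenset({"contract", "minor"}), "not_valid", frozenset({"r1"})),
    ]
    print(conclusions({"contract"}, rules))           # {'valid'}
    print(conclusions({"contract", "minor"}, rules))  # {'not_valid'}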

    Unsupervised Factor Extraction from Pretrial Detention Decisions by Italian and Brazilian Supreme Courts

    Pretrial detention is a debated and controversial measure, since it is an exception to the principle of the presumption of innocence. To determine whether and to what extent legal systems make excessive use of pretrial detention, an empirical analysis of judicial practice is needed. The paper presents some preliminary results of experimental research aimed at identifying the relevant factors on the basis of which the Italian and Brazilian Supreme Courts impose the measure. To analyze and extract the relevant predictive features, we rely on unsupervised learning approaches, in particular association and clustering methods. As a result, we found common factors between the two legal systems in terms of crime, location, grounds for appeal, and the judge’s reasoning.
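
    To illustrate the kind of pipeline described (with invented factors, not the paper's data), clustering binary-encoded decision features in Python might look like this:

    # Illustrative sketch only (invented features, not the paper's dataset):
    # cluster pretrial detention decisions encoded as binary factors.
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import MultiLabelBinarizer

    decisions = [
        {"crime:drug_trafficking", "ground:flight_risk", "court:IT"},
        {"crime:drug_trafficking", "ground:reoffending", "court:BR"},
        {"crime:homicide", "ground:evidence_tampering", "court:IT"},
        {"crime:homicide", "ground:reoffending", "court:BR"},
    ]
    X = MultiLabelBinarizer().fit_transform(decisions)

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    for label, factors in zip(km.labels_, decisions):
        print(label, sorted(factors))  # inspect which factors co-occur per cluster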

    Argumentation and Defeasible Reasoning in the Law

    Different formalisms for defeasible reasoning have been used to represent knowledge and reason in the legal field. In this work, we provide an overview of the following logic-based approaches to defeasible reasoning: defeasible logic, Answer Set Programming, ABA+, ASPIC+, and DeLP. We compare features of these approaches from three perspectives: the logical model (knowledge representation), the method (computational mechanisms), and the technology (available software resources). On top of that, two real examples in the legal domain are designed and implemented in ASPIC+ to showcase the benefit of an argumentation approach in real-world domains. The CrossJustice and Interlex projects are taken as a testbed, and experiments are conducted with the Arg2P technology.
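
    As a minimal illustration of the argumentation machinery such systems build on (a generic Python sketch, not Arg2P code), the grounded extension of a Dung-style abstract framework can be computed by iterating the characteristic function:

    # Grounded extension of an abstract argumentation framework (Dung's
    # semantics). Generic sketch for illustration, not the Arg2P technology.
    def grounded_extension(arguments, attacks):
        """attacks is a set of (attacker, target) pairs."""
        accepted, candidates = set(), set(arguments)
        changed = True
        while changed:
            changed = False
            for a in list(candidates):
                attackers = {x for (x, t) in attacks if t == a}
                # Accept a once every attacker is itself attacked by an accepted argument
                if all(any((d, x) in attacks for d in accepted) for x in attackers):
                    accepted.add(a)
                    candidates.discard(a)
                    changed = True
        return accepted

    # A attacks B, B attacks C: the grounded extension is {A, C}
    print(grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")}))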

    AI in search of unfairness in consumer contracts: the terms of service landscape

    Published online: 18 July 2022

    This article explores the potential of artificial intelligence for identifying cases where digital vendors fail to comply with legal obligations, an endeavour that can generate insights about business practices. While heated regulatory debates about online platforms and AI are currently ongoing, we can look to existing horizontal norms, especially concerning the fairness of standard terms, which can serve as a benchmark against which to assess business-to-consumer practices in light of European Union law. We argue that such an assessment can to a certain extent be automated; we thus present an AI system for the automatic detection of unfair terms in business-to-consumer contracts, a system developed as part of the CLAUDETTE project. On the basis of the dataset prepared in this project, we lay out the landscape of contract terms used in different digital consumer markets and theorize their categories, with a focus on five categories of clauses concerning (i) the limitation of liability, (ii) unilateral changes to the contract and/or service, (iii) unilateral termination of the contract, (iv) content removal, and (v) arbitration. In so doing, the paper provides empirical support for the broader claim that AI systems for the automated analysis of textual documents can offer valuable insights into the practices of online vendors and can also provide valuable help in their legal qualification. We argue that the role of technology in protecting consumers in the digital economy is critical and not sufficiently reflected in EU legislative debates.

    Francesca Lagioia has been supported by the European Research Council (ERC) Project “CompuLaw” (Grant Agreement No. 833647) under the European Union’s Horizon 2020 research and innovation programme, and by the SCUDO project, within the POR-FESR 2014-2020 programme of Regione Toscana. Agnieszka Jabłonowska has been supported by the National Science Center in Poland (Grant Agreement UMO-2019/35/B/HS5/04444). This work has been supported by the Claudette (CLAUseDETecTEr) project, funded by the Research Council of the European University Institute and by the Bureau Européen des Unions de Consommateurs (BEUC).
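
    By way of illustration only (toy clauses, not the CLAUDETTE corpus, model, or code), a baseline sentence-level detector of potentially unfair terms can be sketched in Python with TF-IDF features and a linear SVM:

    # Toy sketch of unfair-clause detection (TF-IDF + linear SVM baseline).
    # The clauses and labels are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    clauses = [
        "We may terminate your account at any time without notice.",
        "We are not liable for any damages arising from your use of the service.",
        "You may export your data at any time from the settings page.",
        "You will be notified thirty days before any change to these terms.",
    ]
    labels = ["unfair", "unfair", "fair", "fair"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(clauses, labels)
    print(clf.predict(["We can remove any content without prior notice."]))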

    Detecting and explaining unfairness in consumer contracts through memory networks

    Published online: 11 May 2021

    Recent work has demonstrated how data-driven AI methods can leverage consumer protection by supporting the automated analysis of legal documents. However, a shortcoming of data-driven approaches is poor explainability. We posit that in this domain useful explanations of classifier outcomes can be provided by resorting to legal rationales. We thus consider several configurations of memory-augmented neural networks where rationales are given a special role in the modeling of context knowledge. Our results show that rationales not only contribute to improving classification accuracy, but are also able to offer meaningful, natural language explanations of otherwise opaque classifier outcomes.

    Sponsor information: Francesca Lagioia has been supported by the European Research Council (ERC) Project “CompuLaw” (Grant Agreement No. 833647) under the European Union’s Horizon 2020 research and innovation programme. Paolo Torroni has been partially supported by the H2020 Project AI4EU (Grant Agreement No. 825619). Marco Lippi would like to thank NVIDIA Corporation for the donation of the Titan X Pascal GPU used for this research. Open access funding provided by Alma Mater Studiorum - Università di Bologna within the CRUI-CARE Agreement.
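
    The general idea can be sketched schematically in PyTorch (dimensions and wiring are assumptions for illustration, not the paper's architecture): a clause representation attends over a memory of encoded rationales, and the attention weights indicate which rationale supports the prediction.

    # Schematic sketch of rationale-based memory attention for clause
    # classification. All sizes and wiring are illustrative assumptions.
    import torch
    import torch.nn as nn

    class RationaleMemoryClassifier(nn.Module):
        def __init__(self, dim=64, n_rationales=10, n_classes=2):
            super().__init__()
            self.memory = nn.Parameter(torch.randn(n_rationales, dim))  # encoded rationales
            self.out = nn.Linear(2 * dim, n_classes)

        def forward(self, clause_vec):                                # (batch, dim)
            attn = torch.softmax(clause_vec @ self.memory.T, dim=-1)  # (batch, n_rationales)
            context = attn @ self.memory                              # (batch, dim)
            logits = self.out(torch.cat([clause_vec, context], dim=-1))
            return logits, attn  # attention weights act as the explanation

    model = RationaleMemoryClassifier()
    logits, attn = model(torch.randn(3, 64))
    print(logits.shape, attn.argmax(dim=-1))  # most relevant rationale per clause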